vqSGD: Vector Quantized Stochastic Gradient Descent
Authors
Abstract
In this work, we present a family of vector quantization schemes vqSGD (Vector-Quantized Stochastic Gradient Descent) that provide an asymptotic reduction in the communication cost with convergence guarantees in first-order distributed optimization. In the process we derive the following fundamental information theoretic fact: $\Theta\left(\frac{d}{R^{2}}\right)$ bits are necessary and sufficient (up to an additive $O(\log d)$ term) to describe an unbiased estimator $\hat{\boldsymbol{g}}(\boldsymbol{g})$ for any $\boldsymbol{g}$ in the $d$-dimensional unit sphere, under the constraint that $\|\hat{\boldsymbol{g}}(\boldsymbol{g})\|_{2}\le R$ almost surely, with $R > 1$. In particular, we consider a randomized scheme based on the convex hull of a point set, which returns an unbiased estimate of the gradient that is almost surely bounded in norm. We provide multiple efficient instances of our scheme that are near optimal and require only $o(d)$ bits of communication, at the expense of a tolerable increase in error. These instances are obtained using well-known families of binary error-correcting codes and provide a smooth tradeoff between the estimation error and the amount of quantization. Furthermore, we show that vqSGD also offers automatic privacy guarantees.
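To make the convex-hull idea concrete, here is a minimal sketch (not necessarily the paper's exact construction) of one such unbiased quantizer: the point set consists of the $2d$ scaled cross-polytope vertices $\pm\sqrt{d}\,\boldsymbol{e}_i$, whose convex hull contains the unit $\ell_2$ ball, so any unit-norm gradient can be written as a convex combination of these vertices and communicated by sampling a single vertex index ($\log_2(2d)$ bits). The function name, the particular convex decomposition, and the use of NumPy are illustrative assumptions; the paper's $o(d)$-bit instances based on error-correcting codes are not shown here.

```python
import numpy as np

def quantize_cross_polytope(g, rng):
    """Unbiased single-index quantizer for a gradient g with ||g||_2 <= 1.

    Point set: the 2d scaled cross-polytope vertices {+sqrt(d)*e_i, -sqrt(d)*e_i}.
    Their convex hull contains the unit l2 ball (since ||g||_1 <= sqrt(d)*||g||_2),
    so g can be written as a convex combination of the vertices; sampling one
    vertex with those weights gives an unbiased estimate whose norm is always
    sqrt(d), and only the vertex index (log2(2d) bits) must be communicated.
    """
    d = g.shape[0]
    scale = np.sqrt(d)

    # Weights that reproduce g in expectation ...
    p_plus = np.maximum(g, 0.0) / scale     # mass on +sqrt(d)*e_i
    p_minus = np.maximum(-g, 0.0) / scale   # mass on -sqrt(d)*e_i
    # ... plus the leftover mass spread evenly over +/- pairs (zero mean).
    leftover = 1.0 - (p_plus.sum() + p_minus.sum())
    p_plus += leftover / (2 * d)
    p_minus += leftover / (2 * d)

    probs = np.concatenate([p_plus, p_minus])   # length 2d, sums to 1
    idx = rng.choice(2 * d, p=probs)            # the only value sent over the wire

    # Decoding at the receiver: vertex `idx` of the scaled cross-polytope.
    g_hat = np.zeros(d)
    g_hat[idx % d] = scale if idx < d else -scale
    return idx, g_hat

# Quick unbiasedness check: the empirical mean of many samples approaches g.
rng = np.random.default_rng(0)
g = rng.normal(size=16)
g /= np.linalg.norm(g)                          # unit-norm "gradient"
est = np.mean([quantize_cross_polytope(g, rng)[1] for _ in range(100_000)], axis=0)
print(np.max(np.abs(est - g)))                  # shrinks as the sample count grows
```

Note the design tradeoff this sketch illustrates: the transmitted message is tiny (one index out of $2d$), but the returned point always has norm $\sqrt{d}$, so the variance of the estimate grows with the dimension; the code-based instances mentioned in the abstract trade a few more bits for lower estimation error.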
Similar resources
Quantized Stochastic Gradient Descent: Communication versus Convergence
Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to the excellent scalability properties of this algorithm, and to its efficiency in the context of training deep neural networks. A fundamental barrier for parallelizing large-scale SGD is the fact that the cost of communicating the gradient updates between nodes can be very la...
Variational Stochastic Gradient Descent
In the Bayesian approach to probabilistic modeling of data, we select a model for the probabilities of the data that depends on a continuous vector of parameters. For a given data set, Bayes' theorem gives a probability distribution over the model parameters. Then the inference of outcomes and probabilities of new data can be found by averaging over the parameter distribution of the model, which is an intr...
Byzantine Stochastic Gradient Descent
This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of the m machines which allegedly compute stochastic gradients every iteration, an α-fraction are Byzantine, and can behave arbitrarily and adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds ε-approximate minimizers of convex functions in T = Õ ( 1...
Parallelized Stochastic Gradient Descent
With the increase in available data, parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic gradient descent algorithm including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7] our variant comes with parallel acceleration guarantees and it poses n...
Preconditioned Stochastic Gradient Descent
Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably, but many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method t...
Journal
Journal title: IEEE Transactions on Information Theory
Year: 2022
ISSN: 0018-9448, 1557-9654
DOI: https://doi.org/10.1109/tit.2022.3161620